
Fix AutoAWQQuantizer GptqQuantizer supported ep and device #1571

Merged
merged 1 commit into main from xiaoyu/cfg on Feb 7, 2025

Conversation

xiaoyu-work
Contributor

@xiaoyu-work xiaoyu-work commented Jan 24, 2025

Describe your changes

Fix the supported EP and device for AutoAWQQuantizer and GptqQuantizer.

Checklist before requesting a review

  • Add unit tests for this change.
  • Make sure all tests can pass.
  • Update documents if necessary.
  • Lint and apply fixes to your code by running lintrunner -a
  • Is this a user-facing change? If yes, give a description of this change to be included in the release notes.
  • Is this PR including examples changes? If yes, please remember to update example documentation in a follow-up PR.

(Optional) Issue link

@@ -228,15 +228,15 @@
     },
     "AutoAWQQuantizer": {
         "module_path": "olive.passes.pytorch.autoawq.AutoAWQQuantizer",
-        "supported_providers": [ "CPUExecutionProvider" ],
-        "supported_accelerators": [ "cpu" ],
+        "supported_providers": [ "CUDAExecutionProvider" ],
Contributor

Maybe we should make it * for both the supported providers and accelerators. The quantization happens on the PyTorch model, and the exported model is compatible with all EPs.

Contributor

Their only requirement is that they need GPUs to run, but that is a host machine requirement, not a target provider or EP requirement.

Contributor Author

Does supported_providers mean the EPs that the output model supports? Is supported_accelerators the same concept?

Contributor

Yes. It's used by auto-opt to filter out passes based on the intended target EP.
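
For context, here is a minimal sketch of how that filtering could look. The function name and most registry values are illustrative assumptions (Olive's real auto-opt code and the actual QNNConversion entry are not shown in this thread); only the AutoAWQQuantizer module_path and CUDAExecutionProvider value come from the diff above.

```python
# Illustrative sketch only: the function and most registry values below are
# hypothetical, not Olive's actual auto-opt code. The registry shape mirrors
# the JSON entry shown in the diff above.
PASS_REGISTRY = {
    "AutoAWQQuantizer": {
        "module_path": "olive.passes.pytorch.autoawq.AutoAWQQuantizer",
        "supported_providers": ["CUDAExecutionProvider"],
        # assumed value; the added accelerator line is not visible in the hunk
        "supported_accelerators": ["gpu"],
    },
    "QNNConversion": {
        # hypothetical entry: the pass should only run when the user targets QNN_EP
        "supported_providers": ["QNNExecutionProvider"],
        "supported_accelerators": ["npu"],
    },
}


def passes_for_target(target_provider: str, target_accelerator: str) -> list:
    """Keep only passes whose metadata allows the intended target EP and device.

    A "*" entry would mean "compatible with any provider/accelerator", as
    suggested in the review comment above.
    """
    selected = []
    for name, meta in PASS_REGISTRY.items():
        providers = meta.get("supported_providers", ["*"])
        accelerators = meta.get("supported_accelerators", ["*"])
        if ("*" in providers or target_provider in providers) and (
            "*" in accelerators or target_accelerator in accelerators
        ):
            selected.append(name)
    return selected


# Targeting CUDA on a GPU keeps AutoAWQQuantizer and filters out QNNConversion.
print(passes_for_target("CUDAExecutionProvider", "gpu"))
```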

Contributor Author
@xiaoyu-work xiaoyu-work Jan 24, 2025

I see. I can see some other passes like QNNConversion are also in this list, but a QNN model doesn't have any EP concept. If this is only used by auto-opt, and auto-opt targets ONNX models, should the unrelated passes whose output models are not ONNX models be removed from this list?

Contributor

A QNN model does not have an EP concept, but you don't want to run the QNNConversion pass if the user did not select QNN_EP.

Contributor

> Does supported_providers mean the EPs that the output model supports? Is supported_accelerators the same concept?

Yes, these fields dictate the target provider and accelerator for running the output model. We do not have parallel support for the host machine and depend on Pass.validate_config to make that decision.
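
To illustrate the host-side requirement mentioned above (the quantizer needs a GPU on the host machine, independent of the target EP), here is a hypothetical check. It is not the actual signature or behavior of Olive's Pass.validate_config; it only sketches the idea of gating a GPU-only pass on host capability.

```python
import torch


# Hypothetical host-side check, not Olive's actual Pass.validate_config:
# a GPU-only pass (e.g. AutoAWQQuantizer) refuses to run when the host
# machine has no CUDA device, even though the model it produces can later
# target any EP.
def host_supports_gpu_pass(pass_name: str) -> bool:
    if not torch.cuda.is_available():
        print(f"{pass_name} requires a GPU on the host machine to run.")
        return False
    return True


host_supports_gpu_pass("AutoAWQQuantizer")
```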

Contributor Author

> I see. I can see some other passes like QNNConversion are also in this list, but a QNN model doesn't have any EP concept. If this is only used by auto-opt, and auto-opt targets ONNX models, should the unrelated passes whose output models are not ONNX models be removed from this list?

@shaahji what about this? I can see all passes need to be registered here anyway?

Contributor

Yes, all passes need to be registered in this config since this is the only way to create a pass by importing its module.

@jambayk jambayk merged commit 5748965 into main Feb 7, 2025
24 checks passed
@jambayk jambayk deleted the xiaoyu/cfg branch February 7, 2025 06:38